Results 1 - 20 of 130
1.
Nature; 622(7984): 842-849, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37821699

ABSTRACT

Central nervous system tumours represent one of the most lethal cancer types, particularly among children [1]. Primary treatment includes neurosurgical resection of the tumour, in which a delicate balance must be struck between maximizing the extent of resection and minimizing risk of neurological damage and comorbidity [2,3]. However, surgeons have limited knowledge of the precise tumour type prior to surgery. Current standard practice relies on preoperative imaging and intraoperative histological analysis, but these are not always conclusive and occasionally wrong. Using rapid nanopore sequencing, a sparse methylation profile can be obtained during surgery [4]. Here we developed Sturgeon, a patient-agnostic transfer-learned neural network, to enable molecular subclassification of central nervous system tumours based on such sparse profiles. Sturgeon delivered an accurate diagnosis within 40 minutes after starting sequencing in 45 out of 50 retrospectively sequenced samples (abstaining from diagnosis of the other 5 samples). Furthermore, we demonstrated its applicability in real time during 25 surgeries, achieving a diagnostic turnaround time of less than 90 minutes. Of these, 18 (72%) diagnoses were correct and 7 did not reach the required confidence threshold. We conclude that machine-learned diagnosis based on low-cost intraoperative sequencing can assist neurosurgical decision-making, potentially preventing neurological comorbidity and avoiding additional surgeries.
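The abstention rule described above (report a tumour class only when the classifier is sufficiently confident, otherwise defer) can be illustrated with a minimal Python sketch; the class names and the 0.8 confidence cut-off are assumptions for illustration, not values from the paper.

import numpy as np

def classify_or_abstain(probabilities, class_names, threshold=0.8):
    """Return the top class, or None to abstain when confidence is below the threshold."""
    probabilities = np.asarray(probabilities, dtype=float)
    best = int(np.argmax(probabilities))
    if probabilities[best] < threshold:
        return None  # abstain: confidence too low for a diagnosis
    return class_names[best]

scores = [0.05, 0.85, 0.10]  # e.g. softmax output over tumour classes
print(classify_or_abstain(scores, ["medulloblastoma", "ependymoma", "glioma"]))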


Subjects
Central Nervous System Neoplasms, Clinical Decision-Making, Deep Learning, Intraoperative Care, Sequence Analysis, DNA, Child, Humans, Central Nervous System Neoplasms/classification, Central Nervous System Neoplasms/diagnosis, Central Nervous System Neoplasms/genetics, Central Nervous System Neoplasms/surgery, Clinical Decision-Making/methods, Deep Learning/standards, Intraoperative Care/methods, Methylation, Retrospective Studies, Sequence Analysis, DNA/methods, Time Factors
2.
Eur J Radiol; 162: 110771, 2023 May.
Article in English | MEDLINE | ID: mdl-36948058

ABSTRACT

A robust cascaded deep learning framework with an integrated hippocampal gray matter (HGM) probability map, called HGM-cNet, was developed to improve hippocampus segmentation, given the structure's significance in neuropsychiatric disorders such as Alzheimer's disease (AD). Specifically, HGM-cNet cascades two identical convolutional neural networks (CNNs), each devised by incorporating an Attention Block, Residual Block, and DropBlock into a typical encoder-decoder architecture. The two CNNs are skip-connected between encoder components at each scale. The cascaded deep learning framework was adopted to conveniently combine the HGM probability map with the feature map generated by the first CNN. Experiments on 135 T1-weighted MRI scans and manual hippocampal labels from the publicly available ADNI-HarP dataset demonstrated that the proposed HGM-cNet outperformed seven multi-atlas-based hippocampus segmentation methods and six deep learning methods under comparison on most evaluation metrics. The Dice score (average > 0.89 for both left and right hippocampi) was around 1% or more higher than that of the other methods. HGM-cNet also achieved superior hippocampus segmentation performance in each group of cognitively normal, mild cognitive impairment, and AD subjects. The stability, convenience, and generalizability of the cascaded deep learning framework with the integrated HGM probability map in improving hippocampus segmentation were validated by replacing the proposed CNN with 3D-UNet, Atten-UNet, HippoDeep, QuickNet, DeepHarp, and TransBTS models. Integrating the HGM probability map into the cascaded deep learning framework was also shown to capture hippocampal atrophy more accurately than alternative methods in AD analysis. The code is publicly available at https://github.com/Liu1436510768/HGM-cNet.git.
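The Dice overlap reported above is a simple metric to reproduce; the sketch below, with toy binary masks as illustrative inputs, shows how it is typically computed for a predicted and a reference hippocampus segmentation.

import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2 * |intersection| / (|pred| + |target|), on binary masks
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True   # toy predicted mask
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True       # toy reference mask
print(round(dice_coefficient(pred, gt), 3))                  # 0.8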


Subjects
Brain Diseases, Deep Learning, Gray Matter, Hippocampus, Humans, Deep Learning/standards, Gray Matter/diagnostic imaging, Hippocampus/diagnostic imaging, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Neural Networks, Computer, Male, Female, Aged, Aged, 80 and over, Probability, Brain Diseases/diagnostic imaging
3.
Eur Radiol; 33(5): 3544-3556, 2023 May.
Article in English | MEDLINE | ID: mdl-36538072

ABSTRACT

OBJECTIVES: To evaluate AI biases and errors in estimating bone age (BA) by comparing AI and radiologists' clinical determinations of BA. METHODS: We established three deep learning models from a Chinese private dataset (CHNm), an American public dataset (USAm), and a joint dataset combining the two (JOIm). The test data CHNt (n = 1246) were labeled by ten senior pediatric radiologists. The effects of data site differences, interpretation bias, and interobserver variability on BA assessment were evaluated. The differences between the AI models' and radiologists' clinical determinations of BA (normal, advanced, and delayed BA groups, using the Brush data) were evaluated by the chi-square test and kappa values. Heatmaps for CHNm-CHNt were generated using Grad-CAM. RESULTS: We obtained an MAD value of 0.42 years on CHNm-CHNt; this indicated appropriate accuracy for the whole group but not an accurate estimation of individual BA, since with a kappa value of 0.714 the agreement between AI and human clinical determinations of BA differed significantly. The features highlighted by the heatmaps were not fully consistent with human vision on the X-ray films. Variable performance in BA estimation by different AI models and the disagreement between AI and radiologists' clinical determinations of BA may be caused by data biases, including patients' sex and age, institutions, and radiologists. CONCLUSIONS: The deep learning models outperform external validation in predicting BA on both internal and joint datasets. However, the biases and errors in the models' clinical determinations of child development should be carefully considered. KEY POINTS: • With a kappa value of 0.714, clinical determinations of bone age by AI did not accord well with clinical determinations by radiologists. • Several biases, including patients' sex and age, institutions, and radiologists, may cause variable performance by AI bone age models and disagreement between AI and radiologists' clinical determinations of bone age. • AI heatmaps of bone age were not fully consistent with human vision on X-ray films.
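For readers unfamiliar with the two agreement measures quoted above, the sketch below shows how a mean absolute difference in years and a Cohen's kappa between categorical determinations (delayed / normal / advanced) are computed; the toy values are illustrative assumptions, not study data.

import numpy as np
from sklearn.metrics import cohen_kappa_score

ai_ba = np.array([10.2, 7.8, 13.1, 9.0])   # AI bone-age estimates (years)
ref_ba = np.array([10.0, 8.5, 12.6, 9.4])  # radiologist reference estimates (years)
print("MAD (years):", np.mean(np.abs(ai_ba - ref_ba)))

ai_cls = ["normal", "delayed", "advanced", "normal"]
ref_cls = ["normal", "normal", "advanced", "normal"]
print("kappa:", cohen_kappa_score(ai_cls, ref_cls))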


Assuntos
Determinação da Idade pelo Esqueleto , Simulação por Computador , Aprendizado Profundo , Criança , Humanos , Viés , Aprendizado Profundo/normas , Radiologistas/normas , Estados Unidos , Determinação da Idade pelo Esqueleto/métodos , Determinação da Idade pelo Esqueleto/normas , Punho/diagnóstico por imagem , Dedos/diagnóstico por imagem , Masculino , Feminino , Pré-Escolar , Adolescente , Variações Dependentes do Observador , Erros de Diagnóstico , Simulação por Computador/normas
4.
Eur Radiol; 33(4): 2519-2528, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36371606

ABSTRACT

OBJECTIVES: Prostate volume (PV) in combination with prostate-specific antigen (PSA) yields PSA density, which is an increasingly important biomarker. Calculating PV from MRI is a time-consuming, radiologist-dependent task. The aim of this study was to assess whether a deep learning algorithm can replace the PI-RADS 2.1-based ellipsoid formula (EF) for calculating PV. METHODS: Eight different measures of PV were retrospectively collected for each of 124 patients who underwent radical prostatectomy and preoperative MRI of the prostate (multicenter, multi-scanner MRI at 1.5 and 3 T). Agreement between volumes obtained from the deep learning algorithm (PVDL) and the ellipsoid formula applied by two radiologists (PVEF1 and PVEF2) was evaluated against the reference standard PV obtained by manual planimetry by an expert radiologist (PVMPE). A sensitivity analysis was performed using the prostatectomy specimen as the reference standard. Inter-reader agreement was evaluated between the radiologists using the ellipsoid formula and between the expert and inexperienced radiologists performing manual planimetry. RESULTS: PVDL showed better agreement and precision than PVEF1 and PVEF2 using the reference standard PVMPE (mean difference [95% limits of agreement] PVDL: -0.33 [-10.80; 10.14], PVEF1: -3.83 [-19.55; 11.89], PVEF2: -3.05 [-18.55; 12.45]) or the PV determined based on specimen weight (PVDL: -4.22 [-22.52; 14.07], PVEF1: -7.89 [-30.50; 14.73], PVEF2: -6.97 [-30.13; 16.18]). Inter-reader agreement was excellent between the two experienced radiologists using the ellipsoid formula and good between the expert and inexperienced radiologists performing manual planimetry. CONCLUSION: The deep learning algorithm performs similarly to radiologists in the assessment of prostate volume on MRI. KEY POINTS: • A commercially available deep learning algorithm performs similarly to radiologists in the assessment of prostate volume on MRI. • The deep learning algorithm was previously untrained on this heterogeneous multicenter day-to-day practice MRI data set.
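As a point of reference for the comparison above, the PI-RADS ellipsoid formula multiplies three orthogonal prostate diameters by pi/6, and agreement is summarised by the Bland-Altman mean difference with 95% limits of agreement. A minimal sketch follows; the example measurements are illustrative assumptions, not study data.

import numpy as np

def ellipsoid_volume(ap_cm, transverse_cm, longitudinal_cm):
    # PI-RADS ellipsoid formula: product of three diameters times pi/6 (cm^3 = mL)
    return ap_cm * transverse_cm * longitudinal_cm * np.pi / 6.0

def bland_altman(measured, reference):
    diff = np.asarray(measured) - np.asarray(reference)
    loa = 1.96 * diff.std(ddof=1)
    return diff.mean(), diff.mean() - loa, diff.mean() + loa  # mean difference, lower/upper 95% LoA

print("PV (mL):", round(ellipsoid_volume(4.1, 4.8, 3.9), 1))
pv_dl = [41.0, 55.2, 33.8, 62.5]   # algorithm volumes (mL)
pv_ref = [40.1, 57.0, 35.2, 60.9]  # manual planimetry reference (mL)
print("mean difference and 95% LoA:", bland_altman(pv_dl, pv_ref))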


Assuntos
Aprendizado Profundo , Imageamento por Ressonância Magnética , Próstata , Neoplasias da Próstata , Radiologistas , Humanos , Masculino , Algoritmos , Aprendizado Profundo/normas , Próstata/anatomia & histologia , Próstata/diagnóstico por imagem , Próstata/patologia , Antígeno Prostático Específico , Neoplasias da Próstata/diagnóstico por imagem , Neoplasias da Próstata/cirurgia , Estudos Retrospectivos , Variações Dependentes do Observador , Sensibilidade e Especificidade , Tamanho do Órgão
6.
Comput Intell Neurosci; 2022: 9602631, 2022.
Article in English | MEDLINE | ID: mdl-35330594

ABSTRACT

To address the problem of detecting athletes' fatigue state after sports activity, a cascaded deep learning detection system is designed, and a convolutional neural network fatigue-state detection model based on multiscale pooling is proposed. First, face detection is carried out with the deep learning model MTCNN to extract the eye and mouth regions. To represent and recognize the eye and mouth states, a multiscale pooling model (MSP) based on ResNet is proposed and trained on eye and mouth states. In real-time detection, the state of the eye and mouth regions is recognized by the trained convolutional neural network model. Finally, the athlete's fatigue is determined based on PERCLOS and the proposed mouth opening and closing frequency (FOM). In training, the batch size is set to 100 and the initial learning rate to 0.01; when the evaluation metric stops improving, the learning rate is reduced by a factor of 10 to 0.001, and a total of 50 epochs are trained. The experimental results show that the precision and recall of the system are high; RGB images taken by an ordinary camera in daytime yield higher precision and recall than infrared images simulating the night state. The results demonstrate that the neural network achieves high detection accuracy, meets real-time requirements, and is robust in complex environments.
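The final decision step described above can be sketched as follows: PERCLOS is the proportion of frames with closed eyes over a window, and FOM counts mouth-opening events per unit time; the fatigue thresholds used here are illustrative assumptions, not the paper's values.

import numpy as np

def perclos(eye_closed_per_frame):
    return float(np.mean(eye_closed_per_frame))  # fraction of frames with eyes closed

def mouth_open_frequency(mouth_open_per_frame, fps):
    opens = np.diff(np.asarray(mouth_open_per_frame, dtype=int)) == 1  # closed -> open transitions
    return opens.sum() / (len(mouth_open_per_frame) / fps)             # openings per second

def is_fatigued(eye_states, mouth_states, fps, perclos_thr=0.4, fom_thr=0.25):
    return perclos(eye_states) > perclos_thr or mouth_open_frequency(mouth_states, fps) > fom_thr

eye_states = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]    # 1 = eyes classified closed in that frame
mouth_states = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]  # 1 = mouth classified open in that frame
print(is_fatigued(eye_states, mouth_states, fps=10))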


Subjects
Face, Fatigue, Motion, Neural Networks, Computer, Athletes/psychology, Deep Learning/standards, Eye, Humans, Mouth, Sports/psychology
7.
Cancer Biomark; 33(2): 185-198, 2022.
Article in English | MEDLINE | ID: mdl-35213361

ABSTRACT

BACKGROUND: With the use of artificial intelligence and machine learning techniques in biomedical informatics, security and privacy concerns over the data and subject identities have become an important issue and an essential research topic. Without intentional safeguards, machine learning models may find patterns and features that improve task performance but are associated with private personal information. OBJECTIVE: The privacy vulnerability of deep learning models for information extraction from medical textual content needs to be quantified, since the models are exposed to private health information and personally identifiable information. The objective of this study is to quantify the privacy vulnerability of deep learning models for natural language processing and to explore a proper way of securing patients' information to mitigate confidentiality breaches. METHODS: The target model is a multitask convolutional neural network for information extraction from cancer pathology reports, where the data for training the model come from multiple state population-based cancer registries. This study proposes the following schemes for collecting vocabularies from the cancer pathology reports: (a) words appearing in multiple registries, and (b) words that have higher mutual information. We performed membership inference attacks on the models in high-performance computing environments. RESULTS: The comparison outcomes suggest that the proposed vocabulary selection methods resulted in lower privacy vulnerability while maintaining the same level of clinical task performance.
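Scheme (b) above ranks candidate vocabulary words by their mutual information with the prediction label. A minimal sketch using scikit-learn follows; the toy reports, labels, and the cut-off of three words are illustrative assumptions.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

reports = [
    "invasive ductal carcinoma of the breast",
    "adenocarcinoma of the colon with necrosis",
    "benign fibroadenoma of the breast",
    "tubular adenoma of the colon",
]
labels = [1, 1, 0, 0]  # toy task, e.g. malignant vs. benign

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(reports)
mi = mutual_info_classif(counts, labels, discrete_features=True, random_state=0)
vocab = np.array(vectorizer.get_feature_names_out())
print(vocab[np.argsort(mi)[::-1][:3]])  # keep the three most informative words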


Assuntos
Confidencialidade , Aprendizado Profundo , Armazenamento e Recuperação da Informação/métodos , Processamento de Linguagem Natural , Neoplasias/epidemiologia , Inteligência Artificial , Aprendizado Profundo/normas , Humanos , Neoplasias/patologia , Sistema de Registros
8.
Oxid Med Cell Longev; 2022: 4378413, 2022.
Article in English | MEDLINE | ID: mdl-35035662

ABSTRACT

BACKGROUND: Vascular calcification (VC) constitutes a subclinical vascular burden and increases cardiovascular mortality. Effective therapeutics for VC remain to be procured. We aimed to use a deep learning-based strategy to screen and uncover plant compounds that could potentially be repurposed for managing VC. METHODS: We integrated drugome, interactome, and diseasome information from the Comparative Toxicogenomic Database (CTD), DrugBank, PubChem, Gene Ontology (GO), and BioGRID to analyze drug-disease associations. Deep representation learning was performed using a high-level description of the local network architecture and features of the entities, followed by learning the global embeddings of nodes derived from a heterogeneous network using a graph neural network architecture, with a random forest classifier established for prediction. Predicted results were tested in an in vitro VC model for validity based on the probability scores. RESULTS: We collected 6,790 compounds with available Simplified Molecular-Input Line-Entry System (SMILES) data, 11,958 GO terms, 7,238 diseases, and 25,482 proteins. Local embedding vectors were obtained using an end-to-end transformer network and a node2vec algorithm, and global embedding vectors were learned from the heterogeneous network via the graph neural network. Our algorithm distinguished well between potential compounds, giving higher prediction scores to compound categories with higher potential and lower scores to other categories. Probability score-dependent selection revealed that antioxidants such as sulforaphane and daidzein were potentially effective compounds against VC, whereas catechin had a low probability. All three compounds were validated in vitro. CONCLUSIONS: Our findings exemplify the utility of deep learning in identifying promising VC-treating plant compounds. Our model can serve as a quick and comprehensive computational screening tool to assist in the early drug discovery process.
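The last stage of the pipeline above, a random forest classifier scoring candidate compound-disease pairs from their embeddings, can be sketched as follows; the random feature vectors stand in for the graph-derived embeddings and are purely illustrative, not the study's data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))     # embeddings of known (compound, disease) pairs
y_train = rng.integers(0, 2, size=200)   # 1 = known association
X_candidates = rng.normal(size=(5, 64))  # candidate compound-disease pairs to score

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
scores = clf.predict_proba(X_candidates)[:, 1]  # predicted association probability
print(np.argsort(scores)[::-1])                 # candidates ranked by score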


Assuntos
Simulação por Computador/normas , Aprendizado Profundo/normas , Aprendizado de Máquina/normas , Plantas/química , Calcificação Vascular/terapia , Algoritmos , Humanos
9.
Clin Epigenetics; 14(1): 11, 2022 01 19.
Article in English | MEDLINE | ID: mdl-35045866

ABSTRACT

BACKGROUND: Heart failure with preserved ejection fraction (HFpEF), affected collectively by genetic and environmental factors, is a common subtype of chronic heart failure. Although available risk assessment methods for HFpEF have achieved some progress, they are based on clinical or genetic features alone. Here, we have developed a deep learning framework, HFmeRisk, that uses both 5 clinical features and 25 DNA methylation loci to predict the early risk of HFpEF in the Framingham Heart Study Cohort. RESULTS: The framework incorporates Least Absolute Shrinkage and Selection Operator and Extreme Gradient Boosting-based feature selection, as well as a Factorization Machine-based neural network recommender system. Model discrimination and calibration were assessed using the AUC and the Hosmer-Lemeshow test. HFmeRisk, including 25 CpGs and 5 clinical features, achieved an AUC of 0.90 (95% confidence interval 0.88-0.92) and a Hosmer-Lemeshow statistic of 6.17 (P = 0.632), outperforming models with clinical characteristics or DNA methylation levels alone, published chronic heart failure risk prediction models, and other benchmark machine learning models. Among them, the DNA methylation levels of two CpGs were significantly correlated with the paired transcriptome levels (R < -0.3, P < 0.05). In addition, the DNA methylation loci in HFmeRisk were associated with intercellular signaling and interaction, amino acid metabolism, and transport and activation, and the clinical variables were all related to the mechanisms underlying HFpEF. Together, these findings provide new supporting evidence for the HFmeRisk model. CONCLUSION: Our study proposes an early risk assessment framework for HFpEF integrating both clinical and epigenetic features, providing a promising path for clinical decision making.
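The two evaluation measures quoted above, discrimination by AUC and calibration by the Hosmer-Lemeshow test (deciles of predicted risk, chi-square with g-2 degrees of freedom), can be sketched as follows; the simulated risks and outcomes are illustrative assumptions.

import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y_true, y_prob, groups=10):
    order = np.argsort(y_prob)
    y_true, y_prob = np.asarray(y_true)[order], np.asarray(y_prob)[order]
    stat = 0.0
    for obs, prob in zip(np.array_split(y_true, groups), np.array_split(y_prob, groups)):
        expected, n = prob.sum(), len(prob)
        stat += (obs.sum() - expected) ** 2 / (expected * (1 - expected / n))
    return stat, chi2.sf(stat, groups - 2)  # statistic and p-value

rng = np.random.default_rng(1)
risk = rng.uniform(0.05, 0.95, size=500)  # toy predicted risks
events = rng.binomial(1, risk)            # well-calibrated toy outcomes
print("AUC:", round(roc_auc_score(events, risk), 3))
print("Hosmer-Lemeshow:", hosmer_lemeshow(events, risk))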


Assuntos
Aprendizado Profundo/normas , Insuficiência Cardíaca/diagnóstico , Medição de Risco/métodos , Volume Sistólico/fisiologia , Idoso , Metilação de DNA/genética , Metilação de DNA/fisiologia , Aprendizado Profundo/estatística & dados numéricos , Feminino , Insuficiência Cardíaca/fisiopatologia , Insuficiência Cardíaca/prevenção & controle , Humanos , Masculino , Pessoa de Meia-Idade , Prognóstico , Medição de Risco/estatística & dados numéricos , Volume Sistólico/genética
10.
Eur J Cancer; 160: 80-91, 2022 01.
Article in English | MEDLINE | ID: mdl-34810047

ABSTRACT

BACKGROUND: Over the past decade, the development of molecular high-throughput methods (omics) increased rapidly and provided new insights for cancer research. In parallel, deep learning approaches revealed the enormous potential for medical image analysis, especially in digital pathology. Combining image and omics data with deep learning tools may enable the discovery of new cancer biomarkers and a more precise prediction of patient prognosis. This systematic review addresses different multimodal fusion methods of convolutional neural network-based image analyses with omics data, focussing on the impact of data combination on the classification performance. METHODS: PubMed was screened for peer-reviewed articles published in English between January 2015 and June 2021 by two independent researchers. Search terms related to deep learning, digital pathology, omics, and multimodal fusion were combined. RESULTS: We identified a total of 11 studies meeting the inclusion criteria, namely studies that used convolutional neural networks for haematoxylin and eosin image analysis of patients with cancer in combination with integrated omics data. Publications were categorised according to their endpoints: 7 studies focused on survival analysis and 4 studies on prediction of cancer subtypes, malignancy or microsatellite instability with spatial analysis. CONCLUSIONS: Image-based classifiers already show high performances in prognostic and predictive cancer diagnostics. The integration of omics data led to improved performance in all studies described here. However, these are very early studies that still require external validation to demonstrate their generalisability and robustness. Further and more comprehensive studies with larger sample sizes are needed to evaluate performance and determine clinical benefits.
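One of the simplest fusion patterns covered by such reviews is late fusion by feature concatenation, sketched below; the random image and omics features and the logistic-regression head are illustrative stand-ins, not any reviewed study's method.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
image_features = rng.normal(size=(100, 512))  # e.g. CNN embeddings of H&E tiles
omics_features = rng.normal(size=(100, 50))   # e.g. gene-expression values
labels = rng.integers(0, 2, size=100)         # e.g. tumour subtype

fused = np.concatenate([image_features, omics_features], axis=1)  # concatenation fusion
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(fused.shape, clf.predict_proba(fused[:2])[:, 1])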


Subjects
Deep Learning/standards, Genomics/methods, Image Processing, Computer-Assisted/methods, Neoplasms/genetics, Humans, Neoplasms/pathology
11.
Circulation; 145(2): 134-150, 2022 01 11.
Article in English | MEDLINE | ID: mdl-34743558

ABSTRACT

BACKGROUND: The microvasculature, the smallest blood vessels in the body, has key roles in maintenance of organ health and tumorigenesis. The retinal fundus is a window for human in vivo noninvasive assessment of the microvasculature. Large-scale complementary machine learning-based assessment of the retinal vasculature with phenome-wide and genome-wide analyses may yield new insights into human health and disease. METHODS: We used 97 895 retinal fundus images from 54 813 UK Biobank participants. Using convolutional neural networks to segment the retinal microvasculature, we calculated vascular density and fractal dimension as a measure of vascular branching complexity. We associated these indices with 1866 incident International Classification of Diseases-based conditions (median 10-year follow-up) and 88 quantitative traits, adjusting for age, sex, smoking status, and ethnicity. RESULTS: Low retinal vascular fractal dimension and density were significantly associated with higher risks for incident mortality, hypertension, congestive heart failure, renal failure, type 2 diabetes, sleep apnea, anemia, and multiple ocular conditions, as well as corresponding quantitative traits. Genome-wide association of vascular fractal dimension and density identified 7 and 13 novel loci, respectively, that were enriched for pathways linked to angiogenesis (eg, vascular endothelial growth factor, platelet-derived growth factor receptor, angiopoietin, and WNT signaling pathways) and inflammation (eg, interleukin, cytokine signaling). CONCLUSIONS: Our results indicate that the retinal vasculature may serve as a biomarker for future cardiometabolic and ocular disease and provide insights into genes and biological pathways influencing microvascular indices. Moreover, such a framework highlights how deep learning of images can quantify an interpretable phenotype for integration with electronic health record, biomarker, and genetic data to inform risk prediction and risk modification.
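The two retinal indices above can be computed from a binary vessel-segmentation mask: vascular density as the fraction of vessel pixels and fractal dimension by box counting (slope of log box count versus log inverse box size). A minimal sketch follows; the toy mask is an illustrative assumption.

import numpy as np

def vascular_density(mask):
    return float(np.mean(mask))

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16)):
    counts = []
    for size in box_sizes:
        h, w = (mask.shape[0] // size) * size, (mask.shape[1] // size) * size
        blocks = mask[:h, :w].reshape(h // size, size, w // size, size)
        counts.append(blocks.any(axis=(1, 3)).sum())  # boxes containing any vessel pixel
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

mask = np.zeros((64, 64), dtype=bool)
mask[32, :] = True; mask[:, 20] = True  # toy "vessels": two crossing lines
print(round(vascular_density(mask), 4), round(box_counting_dimension(mask), 2))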


Assuntos
Aprendizado Profundo/normas , Estudo de Associação Genômica Ampla/métodos , Genômica/métodos , Análise da Randomização Mendeliana/métodos , Microvasos/patologia , Retina/metabolismo , Feminino , Humanos , Masculino , Pessoa de Meia-Idade
12.
Circulation; 145(2): 122-133, 2022 01 11.
Article in English | MEDLINE | ID: mdl-34743566

ABSTRACT

BACKGROUND: Artificial intelligence (AI)-enabled analysis of 12-lead ECGs may facilitate efficient estimation of incident atrial fibrillation (AF) risk. However, it remains unclear whether AI provides meaningful and generalizable improvement in predictive accuracy beyond clinical risk factors for AF. METHODS: We trained a convolutional neural network (ECG-AI) to infer 5-year incident AF risk using 12-lead ECGs in patients receiving longitudinal primary care at Massachusetts General Hospital (MGH). We then fit 3 Cox proportional hazards models, composed of ECG-AI 5-year AF probability, CHARGE-AF clinical risk score (Cohorts for Heart and Aging in Genomic Epidemiology-Atrial Fibrillation), and terms for both ECG-AI and CHARGE-AF (CH-AI), respectively. We assessed model performance by calculating discrimination (area under the receiver operating characteristic curve) and calibration in an internal test set and 2 external test sets (Brigham and Women's Hospital [BWH] and UK Biobank). Models were recalibrated to estimate 2-year AF risk in the UK Biobank given limited available follow-up. We used saliency mapping to identify ECG features most influential on ECG-AI risk predictions and assessed correlation between ECG-AI and CHARGE-AF linear predictors. RESULTS: The training set comprised 45 770 individuals (age 55±17 years, 53% women, 2171 AF events) and the test sets comprised 83 162 individuals (age 59±13 years, 56% women, 2424 AF events). Area under the receiver operating characteristic curve was comparable using CHARGE-AF (MGH, 0.802 [95% CI, 0.767-0.836]; BWH, 0.752 [95% CI, 0.741-0.763]; UK Biobank, 0.732 [95% CI, 0.704-0.759]) and ECG-AI (MGH, 0.823 [95% CI, 0.790-0.856]; BWH, 0.747 [95% CI, 0.736-0.759]; UK Biobank, 0.705 [95% CI, 0.673-0.737]). Area under the receiver operating characteristic curve was highest using CH-AI (MGH, 0.838 [95% CI, 0.807 to 0.869]; BWH, 0.777 [95% CI, 0.766 to 0.788]; UK Biobank, 0.746 [95% CI, 0.716 to 0.776]). Calibration error was low using ECG-AI (MGH, 0.0212; BWH, 0.0129; UK Biobank, 0.0035) and CH-AI (MGH, 0.012; BWH, 0.0108; UK Biobank, 0.0001). In saliency analyses, the ECG P-wave had the greatest influence on AI model predictions. ECG-AI and CHARGE-AF linear predictors were correlated (Pearson r: MGH, 0.61; BWH, 0.66; UK Biobank, 0.41). CONCLUSIONS: AI-based analysis of 12-lead ECGs has similar predictive usefulness to a clinical risk factor model for incident AF and the approaches are complementary. ECG-AI may enable efficient quantification of future AF risk.
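The model comparison above fits Cox proportional hazards models whose covariates are the ECG-AI probability, the CHARGE-AF score, or both. A minimal sketch with the lifelines package follows; the simulated covariates, follow-up times, and events are illustrative assumptions, not study data.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "ecg_ai": rng.uniform(0, 1, n),     # AI-predicted 5-year AF probability
    "charge_af": rng.normal(12, 1, n),  # CHARGE-AF linear predictor
})
hazard = np.exp(1.5 * df["ecg_ai"] + 0.5 * (df["charge_af"] - 12))
df["time"] = rng.exponential(5 / hazard)       # follow-up time (years)
df["af_event"] = (df["time"] < 5).astype(int)  # incident AF within 5 years
df["time"] = df["time"].clip(upper=5)

for covariates in (["ecg_ai"], ["charge_af"], ["ecg_ai", "charge_af"]):
    cph = CoxPHFitter().fit(df[covariates + ["time", "af_event"]],
                            duration_col="time", event_col="af_event")
    print(covariates, "concordance:", round(cph.concordance_index_, 3))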


Assuntos
Fibrilação Atrial/diagnóstico , Aprendizado Profundo/normas , Eletrocardiografia/métodos , Fibrilação Atrial/patologia , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Fatores de Risco
13.
J Invest Dermatol; 142(1): 97-103, 2022 01.
Article in English | MEDLINE | ID: mdl-34265329

ABSTRACT

Basal cell carcinoma (BCC) is the most common skin cancer, with over 2 million cases diagnosed annually in the United States. Conventionally, BCC is diagnosed by naked-eye examination and dermoscopy. Suspicious lesions are either removed or biopsied for histopathological confirmation, thus lowering the specificity of noninvasive BCC diagnosis. Recently, reflectance confocal microscopy, a noninvasive diagnostic technique that can image skin lesions at cellular-level resolution, has been shown to improve specificity in BCC diagnosis and to reduce the number needed to biopsy by 2-3 times. In this study, we developed and evaluated a deep learning-based artificial intelligence model to automatically detect BCC in reflectance confocal microscopy images. The proposed model achieved an area under the receiver operating characteristic curve of 89.7% (stack level) and 88.3% (lesion level), a performance on par with that of reflectance confocal microscopy experts. Furthermore, the model achieved an area under the curve of 86.1% on a held-out test set from international collaborators, demonstrating the reproducibility and generalizability of the proposed automated diagnostic approach. These results provide a clear indication that the clinical deployment of decision support systems for the detection of BCC in reflectance confocal microscopy images has the potential to optimize the evaluation and diagnosis of patients with skin cancer.
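The stack-level versus lesion-level AUCs above imply an aggregation step from per-stack scores to one score per lesion; the sketch below uses maximum aggregation as an illustrative assumption, with toy scores and labels.

import pandas as pd
from sklearn.metrics import roc_auc_score

stacks = pd.DataFrame({
    "lesion_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "score":     [0.9, 0.7, 0.2, 0.4, 0.8, 0.6, 0.1, 0.3],  # model BCC probability per stack
    "label":     [1, 1, 0, 0, 1, 1, 0, 0],                  # BCC present in the lesion
})
print("stack-level AUC:", roc_auc_score(stacks["label"], stacks["score"]))

lesions = stacks.groupby("lesion_id").agg(score=("score", "max"), label=("label", "max"))
print("lesion-level AUC:", roc_auc_score(lesions["label"], lesions["score"]))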


Assuntos
Carcinoma Basocelular/diagnóstico , Aprendizado Profundo/normas , Neoplasias Cutâneas/diagnóstico , Adulto , Idoso , Idoso de 80 Anos ou mais , Inteligência Artificial , Automação , Biópsia , Dermoscopia/métodos , Feminino , Humanos , Masculino , Microscopia Confocal , Pessoa de Meia-Idade , Modelos Biológicos , Exame Físico , Reprodutibilidade dos Testes
14.
Nat Commun; 12(1): 6311, 2021 11 02.
Article in English | MEDLINE | ID: mdl-34728629

ABSTRACT

Machine-assisted pathological recognition has focused on supervised learning (SL), which suffers from a significant annotation bottleneck. We propose a semi-supervised learning (SSL) method based on the mean teacher architecture using 13,111 whole slide images of colorectal cancer from 8803 subjects from 13 independent centers. SSL (~3150 labeled, ~40,950 unlabeled; ~6300 labeled, ~37,800 unlabeled patches) performs significantly better than SL. No significant difference is found between SSL (~6300 labeled, ~37,800 unlabeled) and SL (~44,100 labeled) at patch-level diagnosis (area under the curve (AUC): 0.980 ± 0.014 vs. 0.987 ± 0.008, P value = 0.134) or patient-level diagnosis (AUC: 0.974 ± 0.013 vs. 0.980 ± 0.010, P value = 0.117), which is close to that of human pathologists (average AUC: 0.969). Evaluation on 15,000 lung and 294,912 lymph node images also confirms that SSL can achieve performance similar to that of SL with massive annotations. SSL dramatically reduces the annotation burden and thus has great potential for building expert-level pathological artificial intelligence platforms in practice.
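The mean-teacher idea behind the SSL method above keeps a teacher whose weights are an exponential moving average (EMA) of the student's, with a consistency loss between their predictions on unlabeled patches. The numpy stand-ins below are a minimal illustration, not the paper's implementation.

import numpy as np

def ema_update(teacher_weights, student_weights, alpha=0.99):
    # teacher <- alpha * teacher + (1 - alpha) * student, parameter by parameter
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher_weights, student_weights)]

def consistency_loss(student_probs, teacher_probs):
    return float(np.mean((np.asarray(student_probs) - np.asarray(teacher_probs)) ** 2))

student = [np.array([0.2, -0.1]), np.array([1.0])]  # toy student parameters
teacher = [np.zeros(2), np.zeros(1)]                # toy teacher parameters
teacher = ema_update(teacher, student)

print(teacher, consistency_loss([0.8, 0.3], [0.7, 0.4]))  # predictions on an unlabeled patch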


Assuntos
Inteligência Artificial/normas , Neoplasias Colorretais/patologia , Aprendizado Profundo/normas , Neoplasias Pulmonares/patologia , Aprendizado de Máquina Supervisionado/normas , Neoplasias Colorretais/classificação , Neoplasias Colorretais/diagnóstico por imagem , Humanos , Neoplasias Pulmonares/classificação , Neoplasias Pulmonares/diagnóstico por imagem , Metástase Linfática , Redes Neurais de Computação , Curva ROC
17.
JCI Insight; 6(21), 2021 11 08.
Article in English | MEDLINE | ID: mdl-34591793

ABSTRACT

Obesity is one of the main drivers of type 2 diabetes, but it is not uniformly associated with the disease. The location of fat accumulation is critical for metabolic health. Specific patterns of body fat distribution, such as visceral fat, are closely related to insulin resistance. There might be further, hitherto unknown, features of body fat distribution that could additionally contribute to the disease. We used machine learning with dense convolutional neural networks to detect diabetes-related variables from 2371 T1-weighted whole-body MRI data sets. MRI was performed in participants undergoing metabolic screening with oral glucose tolerance tests. Models were trained for sex, age, BMI, insulin sensitivity, HbA1c, and prediabetes or incident diabetes. The results were compared with those of conventional models. The area under the receiver operating characteristic curve was 87% for the type 2 diabetes discrimination and 68% for prediabetes, both superior to conventional models. Mean absolute regression errors were comparable to those of conventional models. Heatmaps showed that lower visceral abdominal regions were critical in diabetes classification. Subphenotyping revealed a group with high future diabetes and microalbuminuria risk. Our results show that diabetes is detectable from whole-body MRI without additional data. Our technique of heatmap visualization identifies plausible anatomical regions and highlights the leading role of fat accumulation in the lower abdomen in diabetes pathogenesis.


Assuntos
Aprendizado Profundo/normas , Diabetes Mellitus Tipo 2/diagnóstico por imagem , Diabetes Mellitus Tipo 2/diagnóstico , Aprendizado de Máquina/normas , Imageamento por Ressonância Magnética/métodos , Adulto , Feminino , Humanos , Masculino , Pessoa de Meia-Idade
18.
PLoS One; 16(8): e0256111, 2021.
Article in English | MEDLINE | ID: mdl-34398931

ABSTRACT

STUDY OBJECTIVES: Development of inter-database generalizable sleep staging algorithms represents a challenge due to increased data variability across different datasets. Sharing data between different centers is also a problem due to potential restrictions related to patient privacy protection. In this work, we describe a new deep learning approach for automatic sleep staging and address its generalization capabilities on a wide range of public sleep staging databases. We also examine the suitability of a novel approach that uses an ensemble of individual local models and evaluate its impact on the resulting inter-database generalization performance. METHODS: A general deep learning network architecture for automatic sleep staging is presented. Different preprocessing and architectural variant options are tested. The resulting prediction capabilities are evaluated and compared on a heterogeneous collection of six public sleep staging datasets. Validation is carried out in the context of independent local and external dataset generalization scenarios. RESULTS: The best results were achieved using the CNN_LSTM_5 neural network variant. Average prediction capability on independent local testing sets reached a kappa score of 0.80. When individual local models predict data from external datasets, the average kappa score decreases to 0.54. Using the proposed ensemble-based approach, average kappa performance in the external dataset prediction scenario increases to 0.62. To our knowledge, this is the largest study so far, by number of datasets, to validate the generalization capabilities of an automatic sleep staging algorithm using external databases. CONCLUSIONS: Validation results show good general performance of our method compared with expected levels of human agreement, as well as with state-of-the-art automatic sleep staging methods. The proposed ensemble-based approach enables a flexible and scalable design, allowing dynamic integration of local models into the final ensemble, preserving data locality, and increasing the generalization capabilities of the resulting system at the same time.
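The ensemble approach above scores an external recording by combining the outputs of the individually trained local models; a minimal sketch, assuming simple averaging of per-epoch class probabilities, is given below with toy values.

import numpy as np

STAGES = ["W", "N1", "N2", "N3", "REM"]

def ensemble_predict(per_model_probs):
    # per_model_probs: list of (n_epochs, n_stages) arrays, one per local model
    mean_probs = np.mean(np.stack(per_model_probs), axis=0)
    return [STAGES[i] for i in mean_probs.argmax(axis=1)]

model_a = np.array([[0.70, 0.10, 0.10, 0.05, 0.05],
                    [0.10, 0.20, 0.50, 0.10, 0.10]])
model_b = np.array([[0.60, 0.20, 0.10, 0.05, 0.05],
                    [0.05, 0.15, 0.30, 0.20, 0.30]])
print(ensemble_predict([model_a, model_b]))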


Assuntos
Bases de Dados Factuais/normas , Aprendizado Profundo/normas , Eletroencefalografia/métodos , Redes Neurais de Computação , Polissonografia/métodos , Fases do Sono/fisiologia , Sono/fisiologia , Algoritmos , Humanos
19.
Eur J Cancer; 155: 200-215, 2021 09.
Article in English | MEDLINE | ID: mdl-34391053

ABSTRACT

BACKGROUND: Gastrointestinal cancers account for approximately 20% of all cancer diagnoses and are responsible for 22.5% of cancer deaths worldwide. Artificial intelligence-based diagnostic support systems, in particular convolutional neural network (CNN)-based image analysis tools, have shown great potential in medical computer vision. In this systematic review, we summarise recent studies reporting CNN-based approaches for digital biomarkers for characterization and prognostication of gastrointestinal cancer pathology. METHODS: PubMed and Medline were screened for peer-reviewed papers dealing with CNN-based gastrointestinal cancer analyses from histological slides, published between 2015 and 2020. Seven hundred and ninety titles and abstracts were screened, and 58 full-text articles were assessed for eligibility. RESULTS: Sixteen publications fulfilled our inclusion criteria, dealing with tumor or precursor lesion characterization or prognostic and predictive biomarkers: 14 studies on colorectal or rectal cancer, three studies on gastric cancer, and none on esophageal cancer. These studies were categorised according to their endpoints: polyp characterization, tumor characterization and patient outcome. Regarding translation into clinical practice, we identified several studies demonstrating generalization of the classifier with external tests and comparisons with pathologists, but none presenting clinical implementation. CONCLUSIONS: Results of recent studies on CNN-based image analysis in gastrointestinal cancer pathology are promising, but the studies were conducted in observational and retrospective settings. Large-scale trials are needed to assess performance and predict clinical usefulness. Furthermore, large-scale trials are required for approval of CNN-based prediction models as medical devices.


Assuntos
Aprendizado Profundo/normas , Neoplasias Gastrointestinais/classificação , Neoplasias Gastrointestinais/patologia , Humanos , Resultado do Tratamento
20.
Curr Opin Ophthalmol; 32(5): 452-458, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34231530

ABSTRACT

PURPOSE OF REVIEW: In this article, we introduce the concept of model interpretability, review its applications in deep learning models for clinical ophthalmology, and discuss its role in the integration of artificial intelligence in healthcare. RECENT FINDINGS: The advent of deep learning in medicine has introduced models with remarkable accuracy. However, the inherent complexity of these models undermines its users' ability to understand, debug and ultimately trust them in clinical practice. Novel methods are being increasingly explored to improve models' 'interpretability' and draw clearer associations between their outputs and features in the input dataset. In the field of ophthalmology, interpretability methods have enabled users to make informed adjustments, identify clinically relevant imaging patterns, and predict outcomes in deep learning models. SUMMARY: Interpretability methods support the transparency necessary to implement, operate and modify complex deep learning models. These benefits are becoming increasingly demonstrated in models for clinical ophthalmology. As quality standards for deep learning models used in healthcare continue to evolve, interpretability methods may prove influential in their path to regulatory approval and acceptance in clinical practice.
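As a concrete illustration of the kind of interpretability method discussed above, occlusion sensitivity slides a masking patch over the input and records how much the model's score drops; the toy "model" (mean intensity of a fixed region) is an illustrative assumption, not a method from the review.

import numpy as np

def toy_model(image):
    return float(image[8:16, 8:16].mean())  # stand-in for a trained classifier's score

def occlusion_map(image, model, patch=4):
    base = model(image)
    heat = np.zeros((image.shape[0] // patch, image.shape[1] // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
            heat[i, j] = base - model(occluded)  # a large drop marks an important region
    return heat

image = np.random.default_rng(0).uniform(size=(24, 24))
print(occlusion_map(image, toy_model).round(2))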


Subjects
Deep Learning, Ophthalmology, Artificial Intelligence, Clinical Competence, Computer Simulation/standards, Deep Learning/standards, Diagnostic Imaging, Humans, Ophthalmology/standards